
    Evaluation of GPU/CPU Co-Processing Models for JPEG 2000 Packetization

    With the overall goal of increasing the throughput of a GPU-accelerated JPEG 2000 encoder, this paper evaluates whether the post-compression rate control and packetization routines should be carried out on the CPU or on the GPU. Three co-processing models that differ in how the workload is split between the CPU and the GPU are introduced. Both routines are discussed and algorithms for executing them in parallel are presented. Experimental results for compressing a detail-rich UHD sequence to 4 bits/sample indicate speed-ups of 200x for the rate control and 100x for the packetization compared to the single-threaded implementation in the commercial Kakadu library. Executed on the CPU, these two routines take 4x as long as all remaining coding steps on the GPU and therefore present a bottleneck. Even if the CPU bottleneck could be avoided with multi-threading, it is still beneficial to execute all coding steps on the GPU, as this minimizes the required device-to-host transfer and thereby speeds up the critical path from 17.2 fps to 19.5 fps for 4 bits/sample and to 22.4 fps for 0.16 bits/sample.
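    A central step of parallel packetization is determining where each code-block contribution starts in the assembled codestream. The paper does not publish its kernels, but this layout step maps naturally onto a parallel exclusive scan over the per-block byte lengths. The NumPy sketch below is purely illustrative; the function name and the toy lengths are assumptions, not the paper's implementation.

```python
import numpy as np

def contribution_offsets(lengths):
    """Exclusive prefix sum over per-code-block byte lengths.

    Packetization needs to know where every code-block contribution
    starts in the assembled codestream; on a GPU this step maps onto a
    parallel scan, emulated here with NumPy's cumsum.
    """
    lengths = np.asarray(lengths, dtype=np.int64)
    offsets = np.concatenate(([0], np.cumsum(lengths)[:-1]))
    return offsets, int(lengths.sum())

# Hypothetical per-code-block byte lengths after rate control.
offsets, total = contribution_offsets([512, 130, 0, 77, 2048])
print(offsets, total)  # [0 512 642 642 719] 2767
```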

    Verbesserung der Dateiverarbeitungskette in Betriebssystemen durch Nutzung der Skalierbarkeit moderner Kompressionsverfahren (Improving the file-processing chain in operating systems by exploiting the scalability of modern compression methods)

    Motivated by the current challenges in the field of computerized processing of multimedia information, this work contributes to the field of research on data processing and file management within computer systems. It presents novel techniques that enhance existing file and operating systems by exploiting the scalability of modern media formats. For this purpose, the compression formats JPEG 2000 and H.264 SVC are presented with a focus on how they achieve scalability. An analysis of the limiting hardware and software components in a computer system for this application area follows. In particular, the restrictions of the utilized storage devices, data interfaces and file systems are laid out, and workarounds to compensate for the performance bottlenecks are described. Based on the observation that compensating for these bottlenecks requires considerable extra effort, new solution approaches utilizing scalable media are derived and subsequently examined. The work introduces new concepts for managing scalable media files, comprising a new rights management scheme as well as a use-case-optimized storage technique. The rights management scheme allows access to be granted to certain parts of a file, so that the scalability of the media files can be exploited such that users get access to different variants depending on their access rights. The use-case-optimized storage technique increases the throughput of hard disk drives when the subsequent access pattern to the media content is known a priori. In addition, enhancements for existing data workflows are proposed by taking advantage of scalable media. Based on the Substitution Strategy, in which missing data of a scalable file is compensated for, a real-time capable procedure for reading files is shown. Using this strategy, image sequences comprising a video can be played back at a given frame rate without interruptions caused by insufficient throughput of the storage device or by low-speed interfaces used to connect it. Adapted caching strategies increase the number of images that can be held in the cache compared to non-scalable variants. Additionally, a concept called Parameterizable File Access is introduced, which allows users to request a certain variant of a scalable file directly from the file system by adding side-information to a virtual file name.
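    The Parameterizable File Access concept can be pictured as encoding the desired variant in the file name an application opens. The thesis does not prescribe a syntax; the query-string style below and all identifiers are assumptions made purely for illustration.

```python
from urllib.parse import parse_qsl

def parse_virtual_name(virtual_name):
    """Split a parameterized virtual file name into the stored file name
    and the requested variant of the scalable file.

    The 'name?key=value' syntax is an assumption for illustration only;
    the thesis merely states that side-information is attached to a
    virtual file name.
    """
    name, _, params = virtual_name.partition('?')
    variant = dict(parse_qsl(params))
    return name, variant

name, variant = parse_virtual_name('clip_0001.jp2?resolution=2&quality_layers=3')
print(name, variant)  # clip_0001.jp2 {'resolution': '2', 'quality_layers': '3'}
```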

    Movies Tags Extraction Using Deep Learning

    Retrieving information from movies is becoming increasingly demanding due to the enormous amount of multimedia data generated each day. Not only does it help in efficient search, archiving and classification of movies, but it is also instrumental in content censorship and recommendation systems. Extracting key information from a movie and summarizing it in a few tags that best describe the movie presents a distinct challenge and requires an intelligent approach to analyze the movie automatically. In this paper, we formulate the movie tag extraction problem as a machine learning classification problem and train a Convolutional Neural Network (CNN) on a carefully constructed tag vocabulary. Our proposed technique first extracts key frames from a movie and applies the trained classifier to the key frames. The predictions from the classifier are assigned scores and are filtered based on their relative strengths to generate a compact set of the most relevant key tags. We performed a rigorous subjective evaluation of our proposed technique for a wide variety of movies with different experiments. The evaluation results presented in this paper demonstrate that our proposed approach can efficiently extract the key tags of a movie with good accuracy.
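    The abstract states that per-frame predictions are scored and then filtered by relative strength, but gives no formula. The sketch below shows one plausible reading of that step; the additive scoring, the keep_ratio threshold and the sample data are all assumptions, not the paper's method.

```python
from collections import Counter

def aggregate_tags(frame_predictions, keep_ratio=0.5):
    """Aggregate per-key-frame tag predictions into a compact tag set.

    frame_predictions: (tag, confidence) pairs, one per prediction on a
    key frame. Confidences are summed per tag, and tags whose score falls
    below keep_ratio * best_score are dropped.
    """
    scores = Counter()
    for tag, confidence in frame_predictions:
        scores[tag] += confidence
    if not scores:
        return []
    best = max(scores.values())
    return sorted((t for t, s in scores.items() if s >= keep_ratio * best),
                  key=lambda t: -scores[t])

preds = [('action', 0.9), ('explosion', 0.8), ('action', 0.7), ('romance', 0.2)]
print(aggregate_tags(preds))  # ['action', 'explosion']
```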

    An extension of the output profile list for semi-automated quality control using IMF

    New distribution channels for video content arise almost daily. When generating appropriate media formats for such channels, intensive quality control is required. Both workflow composition and verification currently demand a great deal of undesirable human interaction. Thanks to the Interoperable Master Format (IMF), there is already a solution for defining transcoding workflows by using the Output Profile List (OPL). An OPL can define how an IMF Package (IMP) is converted into a particular distribution format. For automated Quality Control (QC) of these formats, the authors present a technology demonstrator that allows QC of IMF Packages as well as of the distribution formats derived via the OPL. To this end, typical quality issues introduced by video transcoders are shown before the concept of the OPL and its role within IMF is explained. Based on these findings, the authors suggest an extension of the current IMF OPL standard to cover (semi-)automated quality control of audio, video and auxiliary data.
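    The core idea of the proposed extension is that each derived output carries its own list of automated checks. The snippet below is only a conceptual sketch of such a check list in Python; it is neither the standardized OPL schema nor the authors' extension, and every field name and threshold is an assumption.

```python
# Conceptual sketch only: each derived output carries its own list of
# automated QC checks. Field names and thresholds are invented.
qc_profile = {
    "output": "broadcast_h264_1080p",
    "checks": [
        {"type": "container_compliance"},
        {"type": "black_frame_detection", "max_duration_s": 2.0},
        {"type": "loudness", "target_lufs": -23.0, "tolerance_lu": 1.0},
    ],
}

def evaluate(profile, measurements):
    """Evaluate each configured check against pre-computed measurements.

    Real tooling would analyse the transcoded essence itself; the
    measurements dict keeps this sketch self-contained.
    """
    results = {}
    for check in profile["checks"]:
        kind = check["type"]
        value = measurements.get(kind)
        if kind == "loudness":
            ok = abs(value - check["target_lufs"]) <= check["tolerance_lu"]
        elif kind == "black_frame_detection":
            ok = value <= check["max_duration_s"]
        else:
            ok = bool(value)
        results[kind] = ok
    return results

print(evaluate(qc_profile, {
    "container_compliance": True,
    "black_frame_detection": 0.0,
    "loudness": -23.4,
}))  # all three checks pass
```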

    GPU-friendly EBCOT variant with single-pass scan order and raw bit plane coding

    A major drawback of JPEG 2000 is the computational complexity of its entropy coder, the Embedded Block Coder with Optimized Truncation (EBCOT). This paper proposes two alterations to the original algorithm that seek to improve the trade-off between compression efficiency and throughput. Firstly, magnitude bits within a bit plane are no longer prioritized by their significance, but are coded in a single pass instead of three, reducing the amount of expensive memory accesses at the cost of fewer truncation points. Secondly, low bit planes can entirely bypass the arithmetic coder and thus do not require any context modelling. Both the encoder and decoder can process such bit planes in a sample-parallel fashion. Experiments show average speed-ups of 1.6x (1.8x) for the encoder and 1.5x (1.9x) for the decoder when raw coding begins after the fourth (third) bit plane, while the data rate increases only by 1.15x (1.3x).
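    Because bypassed bit planes need no context modelling, each sample's bit can be emitted independently, which is what makes them sample-parallel. The NumPy sketch below illustrates raw packing of the low bit planes of a toy code block; the packing order and all names are assumptions, not the paper's bitstream layout.

```python
import numpy as np

def raw_code_low_planes(magnitudes, first_raw_plane):
    """Pack all bit planes below 'first_raw_plane' as raw bits.

    In the proposed EBCOT variant these planes bypass the arithmetic
    coder and need no context modelling, so every sample's bit can be
    produced independently. The plane-major, row-major packing order
    used here is an assumption for illustration only.
    """
    mags = np.asarray(magnitudes, dtype=np.uint32).ravel()
    planes = [(mags >> p) & 1 for p in range(first_raw_plane - 1, -1, -1)]
    return np.packbits(np.concatenate(planes).astype(np.uint8))

block = [[5, 12], [3, 9]]  # toy quantized magnitudes of a code block
print(raw_code_low_planes(block, first_raw_plane=3))  # two packed bytes
```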

    Parallel efficient rate control methods for JPEG 2000

    Since the introduction of JPEG 2000, several rate control methods have been proposed. Among them, post-compression rate-distortion optimization (PCRD-Opt) is the most widely used, and the one recommended by the standard. The approach followed by this method is to first compress the entire image, split into code blocks, and subsequently truncate the set of generated bit streams optimally according to the maximum target bit rate constraint. The literature proposes various strategies for estimating ahead of time where a block will be truncated in order to stop the execution prematurely and save time. However, none of them have been designed with a parallel implementation in mind. Today, multi-core and many-core architectures are becoming popular for JPEG 2000 codec implementations. Therefore, in this paper, we analyze how some techniques for efficient rate control can be deployed on GPUs. To do so, the design of our GPU-based codec is extended to allow stopping the process at a given point. This extension also harnesses a higher level of parallelism on the GPU, leading to up to 40% speedup with 4K test material on a Titan X. In a second step, three selected rate control methods are adapted and implemented in our parallel encoder. A comparison is then carried out and used to select the best candidate to be deployed in a GPU encoder, which gave an extra 40% speedup in those situations where it was actually employed.
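    For readers unfamiliar with PCRD-Opt, the underlying optimization can be viewed as a Lagrangian slope-threshold search: each code block keeps the longest prefix of its bit stream whose rate-distortion slope stays above a common threshold, and the threshold is chosen so that the total byte count meets the budget. The sketch below is a generic serial illustration of that idea, not one of the parallel methods evaluated in the paper; the toy rate-distortion numbers are invented.

```python
def truncate_for_budget(blocks, byte_budget, iters=50):
    """Pick one truncation point per code block so the total fits the budget.

    'blocks' is a list of (rates, distortions) with cumulative byte counts
    and distortions per candidate truncation point, assumed already reduced
    to their convex hull (strictly decreasing slopes). A bisection on the
    Lagrangian slope threshold finds the smallest feasible threshold.
    """
    def pick(rates, dists, lam):
        # Index minimising D + lam * R; index 0 means "drop the block".
        costs = [d + lam * r for r, d in zip(rates, dists)]
        return min(range(len(costs)), key=costs.__getitem__)

    lo, hi = 0.0, 1e12  # slope-threshold search interval
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        total = sum(b[0][pick(*b, lam)] for b in blocks)
        if total > byte_budget:
            lo = lam  # too many bytes -> demand steeper slopes
        else:
            hi = lam  # feasible -> try a smaller threshold
    return [pick(*b, hi) for b in blocks]

# Two toy code blocks: (cumulative bytes, cumulative distortion).
blocks = [([0, 100, 180, 240], [1000.0, 400.0, 250.0, 200.0]),
          ([0, 80, 150], [800.0, 500.0, 420.0])]
print(truncate_for_budget(blocks, byte_budget=300))  # [2, 1]
```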
